Pre-training on dynamic graph neural networks
Abstract
The pre-training of a graph neural network model aims to learn the general characteristics of large-scale graphs or graphs of a similar type, usually through a self-supervised method, which allows the model to work even when node labels are missing. However, existing methods do not take the temporal information of edge generation and the evolution process of the graph into consideration. To address this issue, this paper proposes a pre-training method based on dynamic graph neural networks (PT-DGNN), which uses a dynamic attributed graph generation task to simultaneously learn the structure, semantics, and evolution features of the graph. The method mainly includes two steps: 1) dynamic subgraph sampling, and 2) pre-training using the dynamic attributed graph generation task. The former preserves the local time-aware structure of the original graph by sampling the latest and most frequently interacting nodes. The latter uses observed edges to predict unobserved edges in order to capture the evolutionary characteristics of the network. Comparative experiments on three realistic dynamic network datasets show that the proposed method achieves the best results on the link prediction fine-tuning task, and an ablation study further verifies the effectiveness of the two steps above.
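To make the two steps concrete, here is a minimal, hypothetical PyTorch sketch. All names (GNNEncoder, sample_time_aware_subgraph, pretrain_step) and details such as the timestamp-weighted sampling and the 80/20 time split are our own illustrative choices, not the paper's implementation: step 1 samples a subgraph biased toward the latest interactions, and step 2 hides the newest edges and trains the encoder to predict them from the older, observed ones.

```python
# Hypothetical sketch of the two pre-training steps described above.
# Names and design choices are illustrative, not the paper's code.
import random
import torch
import torch.nn.functional as F

class GNNEncoder(torch.nn.Module):
    """Tiny one-layer mean-aggregation GNN, a stand-in for the paper's encoder."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        # Mean-aggregate neighbor features, then apply a linear map.
        agg = torch.zeros_like(x)
        agg.index_add_(0, edge_index[1], x[edge_index[0]])
        deg = torch.zeros(x.size(0), 1)
        deg.index_add_(0, edge_index[1], torch.ones(edge_index.size(1), 1))
        return torch.relu(self.lin(agg / deg.clamp(min=1)))

def sample_time_aware_subgraph(edges, budget):
    """Step 1: sample edges biased toward recent interactions.

    `edges` is a list of (src, dst, timestamp) triples; weighting by
    timestamp keeps the latest, most active part of the graph.
    """
    weights = [t for _, _, t in edges]
    return random.choices(edges, weights=weights, k=min(budget, len(edges)))

def pretrain_step(encoder, features, sub_edges, optimizer):
    """Step 2: hide the newest edges and predict them from older ones."""
    sub_edges = sorted(sub_edges, key=lambda e: e[2])
    cut = int(0.8 * len(sub_edges))                 # illustrative split
    observed, held_out = sub_edges[:cut], sub_edges[cut:]

    # Encode nodes using only the observed (older) edges.
    obs_index = torch.tensor([[s for s, _, _ in observed],
                              [d for _, d, _ in observed]])
    z = encoder(features, obs_index)

    # Positives: held-out (newest) edges; negatives: random node pairs.
    pos = torch.tensor([(s, d) for s, d, _ in held_out])
    neg = torch.randint(0, features.size(0), pos.shape)
    pos_logit = (z[pos[:, 0]] * z[pos[:, 1]]).sum(-1)
    neg_logit = (z[neg[:, 0]] * z[neg[:, 1]]).sum(-1)
    loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos_logit, neg_logit]),
        torch.cat([torch.ones_like(pos_logit), torch.zeros_like(neg_logit)]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy end-to-end run on a random dynamic graph.
features = torch.randn(100, 16)
edges = [(random.randrange(100), random.randrange(100), t) for t in range(1, 1001)]
enc = GNNEncoder(16, 32)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
print(pretrain_step(enc, features, sample_time_aware_subgraph(edges, 400), opt))
```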
Similar resources
Dynamic coloring of graph
In this thesis, we present and study the dynamic coloring of a graph. A proper vertex k-coloring of a graph G is called a dynamic coloring if at least 2 different colors appear in the neighborhood of every vertex v ∈ V(G) of degree at least 2. The smallest integer k such that G has a dynamic k-coloring is called the dynamic chromatic number of G and is denoted by χ₂(G). Montgomery conjectured that all regular graphs ...
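To make the definition concrete, here is a short Python check (our own illustration, not from the thesis) that verifies both properness and the dynamic condition for a given coloring:

```python
# Illustrative check of the dynamic-coloring condition (not from the thesis).
def is_dynamic_coloring(adj, color):
    """adj: dict vertex -> set of neighbors; color: dict vertex -> color.

    A proper coloring is dynamic if every vertex of degree >= 2
    sees at least 2 distinct colors among its neighbors.
    """
    for v, nbrs in adj.items():
        # Properness: no neighbor shares v's color.
        if any(color[u] == color[v] for u in nbrs):
            return False
        # Dynamic condition on vertices of degree at least 2.
        if len(nbrs) >= 2 and len({color[u] for u in nbrs}) < 2:
            return False
    return True

# Example: the path a-b-c; vertex b has degree 2 and must see 2 colors.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_dynamic_coloring(adj, {"a": 0, "b": 1, "c": 0}))  # False: b sees only color 0
print(is_dynamic_coloring(adj, {"a": 0, "b": 1, "c": 2}))  # True
```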
Dynamic Optimal Training for Competitive Neural Networks
This paper introduces an unsupervised learning algorithm for optimal training of competitive neural networks. The learning rule of this algorithm is derived from the minimization of a new objective criterion using the gradient descent technique. Its learning rate and competition difficulty are dynamically adjusted throughout iterations. Numerical results that illustrate the performance of this ...
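As a generic illustration of competitive learning with a dynamically adjusted learning rate, consider the minimal NumPy sketch below (our own toy example; the paper's actual objective criterion and adjustment rule are not reproduced here):

```python
# Generic competitive-learning sketch (illustrative only; not the
# paper's exact objective or adjustment schedule).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))          # toy input vectors
prototypes = rng.normal(size=(4, 2))      # 4 competing units

for t in range(1, 2001):
    lr = 0.5 / (1 + 0.01 * t)             # learning rate decays over iterations
    x = data[rng.integers(len(data))]
    winner = np.argmin(((prototypes - x) ** 2).sum(axis=1))  # competition
    # Winner moves toward the input (gradient step on squared distance).
    prototypes[winner] += lr * (x - prototypes[winner])
```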
Auto-encoder pre-training of segmented-memory recurrent neural networks
The extended Backpropagation Through Time (eBPTT) learning algorithm for Segmented-Memory Recurrent Neural Networks (SMRNNs) still lacks the ability to reliably learn long-term dependencies. The alternative learning algorithm, extended Real-Time Recurrent Learning (eRTRL), does not suffer from this problem but is computationally very intensive, such that it is impractical for the training of large netwo...
Neural Transplant Surgery: An Approach to Pre-training Recurrent Networks
Partially-recurrent networks have advantages over strictly feed-forward networks for certain spatiotemporal pattern classification or prediction tasks. However, networks involving recurrent links are generally more difficult to train than their non-recurrent counterparts. In this paper we demonstrate that the costs of training a recurrent network can be greatly reduced by initialising the network...
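A minimal sketch of such pre-training by weight transplanting, assuming a PyTorch setup and an architecture of our own choosing (not the paper's), might look like this:

```python
# Hypothetical weight-transplant sketch (illustrative; not the paper's setup).
import torch

in_dim, hid_dim, out_dim = 8, 16, 4

# 1) A feed-forward network, assumed already trained on the task.
ff_in = torch.nn.Linear(in_dim, hid_dim)
ff_out = torch.nn.Linear(hid_dim, out_dim)

# 2) A partially-recurrent counterpart: same feed-forward weights,
#    plus a hidden-to-hidden connection initialized near zero.
rnn = torch.nn.RNN(in_dim, hid_dim, batch_first=True)
readout = torch.nn.Linear(hid_dim, out_dim)
with torch.no_grad():
    rnn.weight_ih_l0.copy_(ff_in.weight)        # transplant input weights
    rnn.bias_ih_l0.copy_(ff_in.bias)
    rnn.weight_hh_l0.mul_(0.01)                 # weak initial recurrence
    rnn.bias_hh_l0.zero_()
    readout.weight.copy_(ff_out.weight)         # transplant output weights
    readout.bias.copy_(ff_out.bias)

# Further training then only needs to refine the recurrent links.
x = torch.randn(2, 5, in_dim)                   # (batch, time, features)
h, _ = rnn(x)
y = readout(h[:, -1])                           # predict from last hidden state
```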
Pre-training of Recurrent Neural Networks via Linear Autoencoders
We propose a pre-training technique for recurrent neural networks based on linear autoencoder networks for sequences, i.e. linear dynamical systems modelling the target sequences. We start by giving a closed form solution for the definition of the optimal weights of a linear autoencoder given a training set of sequences. This solution, however, is computationally very demanding, so we suggest a...
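For plain vectors (a simplification of the paper's sequence construction), the optimal weights of a linear autoencoder have a well-known closed form via the truncated SVD of the centered data matrix, as this NumPy sketch shows:

```python
# Closed-form linear autoencoder via truncated SVD (a simplification:
# plain vectors rather than the paper's sequence construction).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # 200 samples, 10 features
k = 3                                 # hidden (code) dimension

# SVD gives the optimal rank-k linear encoder/decoder in one shot.
U, S, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
encoder = Vt[:k].T                    # (10, 3): project onto top-k directions
decoder = Vt[:k]                      # (3, 10): map codes back

codes = (X - X.mean(0)) @ encoder
recon = codes @ decoder + X.mean(0)
print("reconstruction MSE:", np.mean((X - recon) ** 2))
```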
Journal
Journal title: Neurocomputing
Year: 2022
ISSN: 0925-2312, 1872-8286
DOI: https://doi.org/10.1016/j.neucom.2022.05.070